Test Case:
  A test case is a document that has a set of test data, preconditions, expected results and postconditions, developed for a particular test scenario in order to verify compliance against a specific requirement.

	OR

It is a step-by-step procedural document containing steps, expected results, pre-conditions, test data, etc., used to validate the functionality of the given requirements.


Q: Advantages of writing the test cases?
1. Test cases provide reusability.
2. Better test coverage.
3. Consistency in test execution.
4. Helps find more defects, which leads to higher quality of the s/w.
5. A newcomer can easily ramp up on the s/w with the help of test cases.
6. Dependency on specific resources (people) is reduced.
7. They form the base for testing the given functionality or application.


Q: What If no test cases are written?
1. There is no reusability.
2. No consistency in test execution.
3. Poor test coverage.
4. Defects in the s/w may be missed.
5. It is very difficult for a newcomer to ramp up on the application.
6. Resource dependency will be higher.
7. We cannot achieve better quality in the s/w application.
8. There is no base for testing the s/w or application.

==========================================================================================================================
Q: Points to be considered while writing test cases?
1. Every test case should have all of its steps complete.
2. We should write both +ve and -ve test cases.
3. Don't elaborate every step. Explain/elaborate only those steps for which you are writing the test case.
4. All the important words or sentences should be highlighted, either by underlining them, making them bold, etc.
5. All the steps in the test case must have expected results.
6. There should be a proper mapping for requirement ID's and test case ID's for easy tracking.
7. Should have appropriate pre-conditions whenever required.
8. All the test cases should be assigned a priority (e.g., High, Medium, or Low).
9. Zero Assumptions: Test cases should not contain assumed data, and don’t come up with features/modules that don’t exist.
10. Different input data: While writing test cases, all types of data must be taken into consideration.
11. Maximum conditions: All kinds of conditions should be taken into consideration while writing a test, increasing the effectiveness.
12. Always use should/must while writing expected results.
===========================================================================================================================
Q: Difference between Test Case and Test Scenario?
Ans:

Definition:
(a) A test case is a defined format for software testing, required to check whether a particular application/software/module is working. Here we check different conditions for the same.

(a) A test scenario provides a small description of what needs to be performed, based on the use case.


Level of detailing:
(b) Test cases are more detailed with several parameters.
(b) Test Scenario provides a small description, mostly one-line statements.

Action Level:
(c) Test cases are low-level actions.
(c) Test scenarios are high-level actions.

Derived from:
(d) Test cases are mostly derived from test scenarios.
(d) Test scenarios are derived from documents like BRS, SRS, etc.

Objective:
(e) It focuses on "What to test" and "How to test".
(e) It focuses more on "What to test".

Resources required:
(f) Test cases require more resources for documentation and execution.
(f) Fewer resources are required to write test scenarios.

Inputs:
(g) It includes all positive and negative inputs, expected results, navigation steps, etc.
(g) They are one-liner statements.

Time requirement:
(h) It requires more time compared to test scenarios.
(h) Test scenarios require less time.

Maintenance:
(i) They are hard to maintain.
(i) They require less time to maintain.

===========================================================================================================================
Test Case Design Techniques: (Refer to the diagram "Test Case-Design-Techniques.jpg" for the same)

I. Specification-Based or Black-Box Techniques:
Specification-based testing, also known as black-box testing, is a technique that concentrates on testing the software system based on its functioning without any knowledge regarding the underlying code or structure. 
It is an input/output-based testing technique because it considers the software as a black box with inputs and outputs. It provides some inputs to the software and then compares the outputs produced with the desired outputs.
This technique is further classified as:

(a) Boundary Value Analysis (BVA)
	In this technique, we design our test cases to test at or near the boundary values. Test cases often contain values from the upper boundary and the lower boundary. To make the task more manageable, BVA offers us four main test scenarios, as mentioned below.

->Minimum
->Just below the min boundary values
->Maximum
->Just above the max boundary values

Ex:
Test Condition: A valid adult user must be between [18 and 59 yrs], both inclusive.

Expected Behaviour: If the input age is not valid (i.e. age < 18 or age >=60), then the form must prompt a "To register, you must be between 18 yrs to 59 yrs." alert.
Hence, based on the four test scenarios for BVA, we can design test cases as – 18(Minimum), 
17 (just below the minimum), 
59(maximum), 
and 60(just above the maximum).

So we need to design test cases as follows:
Test Case 1: (Just Below Minimum)
Input: age = 17
Expected Output: "To register, you must be between 18 yrs to 59 yrs." alert pops up.

Test Case 2: (Minimum)
Input: age = 18
Expected Output: Proceed to Registration.

Test Case 3: (Maximum)
Input: age = 59
Expected Output: Proceed to Registration.

Test Case 4: (Just Above Maximum)
Input: age = 60
Expected Output: "To register, you must be between 18 yrs to 59 yrs." alert pops up.
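The four BVA test cases above can be sketched as a small self-checking script. Note that `validate_age` and its return strings are hypothetical stand-ins for the real registration form; this is a sketch of the technique, not the actual implementation.

```python
# Hedged sketch of Boundary Value Analysis for the age rule [18, 59].
# `validate_age` is a hypothetical stand-in for the real form's validation.

def validate_age(age: int) -> str:
    """Return the expected form behaviour for a given age."""
    if 18 <= age <= 59:
        return "Proceed to Registration"
    return "To register, you must be between 18 yrs to 59 yrs."

# BVA values: just below min, min, max, just above max.
bva_cases = [
    (17, "To register, you must be between 18 yrs to 59 yrs."),  # Test Case 1
    (18, "Proceed to Registration"),                             # Test Case 2
    (59, "Proceed to Registration"),                             # Test Case 3
    (60, "To register, you must be between 18 yrs to 59 yrs."),  # Test Case 4
]

for age, expected in bva_cases:
    actual = validate_age(age)
    assert actual == expected, f"age={age}: got {actual!r}"
```

Each tuple corresponds to one of the four BVA test cases listed above, so a failing boundary immediately identifies which test case broke.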
*****************************************

(b) Equivalence Partitioning (EP)
	In the Equivalence Partitioning technique for testing, the entire range of input data is split into separate partitions.  All imaginable test cases are assessed and divided into logical sets of data named classes. One random test value is selected from each class during test execution.
	
	To test the functionality of the user age from the input form (i.e., it must accept the age between 18 to 59, both inclusive; otherwise, produce an error alert), we will first find all the possible similar types of inputs to test and then place them into separate classes. In this case, we can divide our test cases into three groups or classes:

Age < 18 – Invalid – ( For e.g. 1, 2, 3, 4, …, up to 17).
18 <= Age <= 59 – Valid – ( For e.g. 18, 19, 20, …, up to 59).
Age > 59 – Invalid – (For e.g. 60, 61, 62, 63, …)

These designed test cases seem too many to execute, don't they? But here lies the beauty of equivalence testing: although there are infinitely many possible test values, we only need to test one value from each class. This reduces the number of tests we need to perform while increasing our test coverage. So we run only a definite number of tests, picking a value at random from each class and checking the expected behavior for each input.
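The three classes and the one-value-per-class rule can be sketched as follows; `accepts_age` is a hypothetical stand-in for the real age validation, and the upper cap of 120 is an illustrative assumption (the real invalid-high class is unbounded).

```python
import random

# Hedged sketch of Equivalence Partitioning for the age rule [18, 59].
# `accepts_age` is a hypothetical stand-in for the real form's validation.

def accepts_age(age: int) -> bool:
    """True if the form should accept the age."""
    return 18 <= age <= 59

# Three equivalence classes; one random representative per class suffices.
partitions = {
    "invalid_low":  (range(1, 18),   False),  # Age < 18
    "valid":        (range(18, 60),  True),   # 18 <= Age <= 59
    "invalid_high": (range(60, 121), False),  # Age > 59 (capped at 120 here)
}

for name, (values, expected) in partitions.items():
    age = random.choice(list(values))  # any member of the class behaves the same
    assert accepts_age(age) == expected, f"{name}: age={age}"
```

Because every member of a class is treated identically by the rule, the random choice does not affect the verdict; that is exactly the property Equivalence Partitioning relies on.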
********************************************
	
(c) Decision Table Testing
	A Decision Table is a technique that demonstrates the relationship between the inputs provided, rules, outputs, and test conditions. Among test case design techniques, the decision table is a very efficient way to handle complex test cases. It allows testers or developers to inspect all credible combinations of conditions for testing. True (T) and False (F) values signify whether each condition is met.

Decision table testing is a test case design technique that examines how the software responds to different input combinations. In this technique, various input combinations or test cases and the concurrent system behavior (or output) are tabulated to form a decision table. That’s why it is also known as a Cause/Effect table, as it captures both the cause and effect for enhanced test coverage.

Automation testers or developers mainly use this technique to make test cases for complex tasks that involve lots of conditions to be checked. To understand the Decision table testing technique better, let’s consider a real-world example to test an upload image feature in the software system.

Test Name: Test upload image form.

Test Condition: The upload option must accept JPEG images only, and only of size less than 1 Mb.

Expected Behaviour: If the input is not a JPEG image or is not less than 1 Mb in size, it must display an "Invalid input" alert; otherwise, it must accept the upload.

Test Cases using Decision Table Testing:

Based upon our testing conditions, we should test our software system for the following conditions:
The uploaded Image must be in JPEG format.
Image size must be less than 1 Mb.


If any of the conditions are not satisfied, the software system will display an "invalid input" alert, and if all requirements are met, the image will be correctly uploaded.
Now, let’s try to make the decision table to design the most suitable test cases.

Conditions		Test Case 1				Test Case 2				Test Case 3					Test Case 4
Image Format	.jpg/.jpeg (T)			.jpg/.jpeg (T)			Not .jpg/.jpeg (F)			Not .jpg/.jpeg (F)
Image Size		< 1 Mb (T)				>= 1 Mb (F)				< 1 Mb (T)					>= 1 Mb (F)
Output			Upload image (T)		Invalid input (F)		Invalid input (F)			Invalid input (F)


Based on the above-formed decision table, we can develop 4 separate test cases to guarantee comprehensive coverage for all the necessary conditions.
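The decision table maps directly onto a small data-driven check. Here `upload_image` is a hypothetical stand-in for the real upload handler, with the two conditions reduced to booleans for illustration.

```python
# Hedged sketch of Decision Table testing for the image-upload rule above.
# `upload_image` is a hypothetical stand-in for the real upload handler.

def upload_image(is_jpeg: bool, under_1mb: bool) -> str:
    """Accept only JPEG images smaller than 1 Mb."""
    return "Upload image" if (is_jpeg and under_1mb) else "Invalid input"

# One row per test case in the decision table:
# (Image Format is JPEG, Image Size < 1 Mb, expected Output).
decision_table = [
    (True,  True,  "Upload image"),   # Test Case 1
    (True,  False, "Invalid input"),  # Test Case 2
    (False, True,  "Invalid input"),  # Test Case 3
    (False, False, "Invalid input"),  # Test Case 4
]

for is_jpeg, under_1mb, expected in decision_table:
    assert upload_image(is_jpeg, under_1mb) == expected
```

Encoding the table as data keeps the test cases and the table in one-to-one correspondence, which makes it easy to spot a missing combination.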

********************************************
(d) State Transition Diagrams
	State Transition Testing is a software testing technique used to check the change in the state of the application or software under changing input. The input passed is varied, and the change in the state of the application is observed.

In test case design techniques, State Transition Testing is mainly carried out to monitor the behavior of the application or software for various input conditions passed in a sequential manner. In this type of testing, negative and positive input values are passed, and the behavior of the application or software is observed.

To perform State transition testing efficiently on complex systems, we take the help of the State transition diagram or State transition table. The state transition Diagram or state transition table mainly represents the relation between the change in input and the behavior of the application

State Transition Testing can only be performed where different system transitions are required to be tested. A great real-world example where State Transition Testing is performed can be an ATM machine.

Let’s understand this testing technique by considering an example of a mobile passcode verification system.

Test Name: Test the passcode verification system of a mobile to unlock it.

Expected Behavior: The phone must unlock when the user enters the correct passcode; otherwise, it will display an incorrect-password message. Also, if an incorrect passcode is entered 3 consecutive times, the device will enter a 60-sec cooling period to prevent brute-force attacks.

<See the diagram "State-Transition-Diagram.png" for more details:>

In the state transition diagram, if the user enters the correct passcode within the first three attempts, he is transferred to the Device Unlock state; if he enters a wrong passcode, he moves to the next try, and if he enters a wrong passcode 3 consecutive times, the device goes into a 60-sec cooling period.

Now we can use this diagram to analyze the relation between the input and its behavioral change. Henceforth, we can make the test cases to test our system (Mobile passcode verification system) properly.

State Transition Table:
Similar to the above state transition diagram, we can also design our test case using the state transition table. The state transition table for this particular example is as follows:

States		Correct Passcode	Incorrect Passcode
1).Start	Go to step 2.		Go to step 2.
2).1st try	Go to step 5.		Go to step 3.
3).2nd try	Go to step 5.		Go to step 4.
4).3rd try	Go to step 5.		Go to step 6.
5).Access Grant	–	–
6).Cooling Period	–	–
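The transitions in the table above can be sketched as a tiny state machine. The passcode value "1234" and the class name are illustrative assumptions; the 3-attempt limit and cooling state come from the example itself.

```python
# Hedged sketch of the passcode state machine from the table above.
# The passcode "1234" and class structure are illustrative assumptions.

class PasscodeLock:
    MAX_ATTEMPTS = 3

    def __init__(self, passcode: str = "1234"):
        self.passcode = passcode
        self.failed = 0
        self.state = "LOCKED"

    def enter(self, attempt: str) -> str:
        if self.state == "COOLING":
            return "COOLING"            # inputs ignored during cooling period
        if attempt == self.passcode:
            self.state = "UNLOCKED"     # Access Grant (step 5)
        else:
            self.failed += 1
            if self.failed >= self.MAX_ATTEMPTS:
                self.state = "COOLING"  # Cooling Period (step 6)
        return self.state

# Two wrong tries, then a correct one: device unlocks on the 3rd attempt.
lock = PasscodeLock()
assert lock.enter("0000") == "LOCKED"
assert lock.enter("9999") == "LOCKED"
assert lock.enter("1234") == "UNLOCKED"

# Three consecutive wrong tries trigger the cooling period.
lock = PasscodeLock()
for _ in range(3):
    lock.enter("0000")
assert lock.state == "COOLING"
```

Each assertion walks one path through the state transition table, so the test cases cover both the Access Grant and Cooling Period end states.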
********************************************************************************************

(e) Use Case Testing:
	As the name suggests, in Use case testing, we design our test cases based on the use cases (Usage of the software) of the software or application depending on the business logic and end-user functionalities. It is a black box testing technique that helps us identify test cases that constitute part of the whole system on a transaction basis from beginning to end.

A use case testing technique serves the following objectives:

Manages scope conditions related to the project.
Depicts the different manners in which a user may interact with the system or software.
Visualizes the system architecture.
Permits assessment of potential risks and system dependencies.
Communicates complex technical necessities to relevant stakeholders easily.
Let’s understand the Use case testing technique using our last example of a mobile passcode verification system. Before moving on to the test case, let’s first assess the use cases for this particular system for the user.

(a) The user may unlock the device on his/her first try.

(b) Someone may attempt to unlock the user’s device with the wrong passcode three consecutive times. Provide a cooling period in such a situation to avoid brute-force attacks.

(c) The device must not accept a passcode while the cooling period is active.

(d) The user should be able to unlock the device after the expiry of the cooling period by entering the correct passcode.

Now, analyzing these use cases can help us primarily design test cases for our system. These test cases can be either tested manually or automatically.
---------------------------------------------------------------
II. Structure-Based or White-Box Techniques:
Structure-Based testing, or White-Box testing, is a test case design technique for testing that focuses on testing the internal architecture, components, or the actual code of the software system. It is further classified into five categories:

(a) Statement Coverage
(b) Decision Coverage
(c) Condition Coverage
(d) Multiple Condition Coverage Testing
(e) All Path Coverage Testing
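To give a feel for the first two categories, here is a toy sketch (the function and values are purely illustrative): a single input can execute every statement, but decision coverage additionally requires the False outcome of the condition.

```python
# Hedged illustration of Statement vs. Decision coverage on a toy function.
# The function and input values are illustrative assumptions.

def classify(x: int) -> str:
    result = "non-positive"
    if x > 0:
        result = "positive"
    return result

# Statement coverage: x = 5 alone executes every statement at least once
# (both assignments, the condition, and the return).
assert classify(5) == "positive"

# Decision coverage additionally needs the False outcome of `x > 0`,
# so a second test case is required.
assert classify(-3) == "non-positive"
```

This is why decision coverage generally demands more test cases than statement coverage: every branch outcome, not just every line, must be exercised.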
----------------------------------------------------------------
III. Experience-Based techniques:
As the name suggests, Experience-Based testing is a technique for testing that requires actual human experience to design the test cases. The outcomes of this technique are highly dependent on the knowledge, skills, and expertise of the automation tester or developer involved. It is broadly classified into the following types:

(a) Error Guessing:
	Error Guessing is an experience-based test case design technique where testers or developers use their experience to guess the troublesome areas of the software or application. This approach necessarily requires a skilled and experienced tester.

The error guessing technique proves very persuasive when used in addition to other structured testing techniques. It uncovers errors that would otherwise be unlikely to surface through structured testing. Hence, the tester’s experience saves a lot of effort and time.

Now, let’s find out some of the guidelines to be followed for error guessing technique:

(1) Always remember earlier trouble areas: During any testing task, whenever you encounter an interesting bug, note it down for future reference.

(2) Improve technical understanding: Knowing how the code is written & how error-prone concepts like null pointers, loops, arrays, exceptions, boundaries, indexes, etc., are implemented in the code helps in testing.

(3) Do not just look for mistakes in the code but also find errors in design, requirements, build, usage, and testing.

(4) Apprehend the system under test.

(5) Consider historical data and test outcomes.
For example, the error-guessing technique can pinpoint input validation situations, such as entering uncommon characters, very long inputs, or unforeseen combinations of input values. Testers or developers can then check whether the system handles such inputs gracefully or exhibits unexpected behavior or vulnerabilities.
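The input-validation example can be sketched as a short list of "guessed" trouble inputs run against a validator. `is_acceptable` and its rules are hypothetical, chosen only to show the kind of inputs an experienced tester would try first.

```python
# Hedged sketch of error-guessing inputs for a text field.
# `is_acceptable` is a hypothetical validator used only for illustration.

def is_acceptable(value: str) -> bool:
    """Reject empty, overly long, or control-character inputs."""
    if not value.strip():
        return False          # empty or whitespace-only
    if len(value) > 100:
        return False          # unreasonably long input
    return not any(ord(c) < 32 for c in value)  # embedded control characters

# Classic "guessed" trouble spots drawn from past defects:
# empty string, whitespace-only, very long input, embedded NUL, odd punctuation.
tricky_inputs = ["", "   ", "a" * 1000, "name\x00null", "O'Brien"]

results = {repr(v[:12]): is_acceptable(v) for v in tricky_inputs}
# Only the ordinary name with an apostrophe should be accepted by this rule.
```

The value of the technique is in the input list itself: each entry encodes a lesson learned from a previous defect, which is exactly what guideline (1) above recommends recording.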

**************************************************************
(b) Exploratory Testing:
	Exploratory Testing is a type of test case design technique for software testing where automation testers or developers do not create test cases in advance but rather check the system in real-time. They usually note down ideas and opinions about what to test before test implementation. The primary focus of exploratory testing is mainly on testing as a "thinking" activity.

Exploratory Testing is widely used by testers in QA testing and is all about learning, discovery, and investigation. It highlights the personal responsibility and freedom of the individual tester.

For example, Exploratory testing for a user login feature involves testing beyond the basics. Testers would start by verifying standard login scenarios and then delve into boundary cases, such as incorrect credentials or excessive login attempts. They would assess features like social media integration, password recovery, and session management.

Exploratory testing would cover aspects like localization, accessibility, and cross-browser/device compatibility. By taking this approach, the testing team ensures a secure, user-friendly, and comprehensive login experience for the application’s users.
===========================================================================================================================

Q: Explain the test cases templates?
Ans: In our organization we use an Excel sheet for writing the test cases. The test case template is as follows:
(a) Sl. No
(b) TestCaseID: Same as Requirement ID
(c) Description: Test case name
(d) Priority: How important this test case based on business (H, M, L)
(e) Pre-Requisite: Condition required to start the test case execution.
(f) Step No.: Number of steps in each test case; used for effort estimation.
(g) DetailedDescription: complete Steps in the test cases
(h) ExpectedResult: Every step should have expected result.
(i) ActualResult: It should be blank while writing test case, filled while execution.
(j) ExecuteTest: Indicates which test cases have to be executed.
(k) Status: Passed/Failed/Blocked/Pending etc
(l) ExecutedBy: Tester name who has executed the test case.
(m) DateOfExecution: Mention the date
(n) Author: Who has written the test case.
(o) Reviewer: Who has reviewed the test cases.
(p) ExecutedOnBuild: Mention the build number


Q: Are you still using excel file for writing the test cases?
Ans: There are many tools available for writing test cases, viz., QC/ALM, TFS, TestRail, JIRA, TestLink, SPIRATEST, etc.

Yes, you are absolutely correct. We are planning to migrate from the Excel sheet to TestLink next month. Hence, training will be planned in the coming days.


Q: What is test scenario?
Ans: An end-to-end workflow or functionality written for the given requirements is known as a test scenario.

Example for test scenario:
FB Registration: below are the test scenarios for FB registration.

1. Register to FB using phone number
2. Register to FB using email
3. Register to FB using your DOB as today's date (-)
4. Register to FB using a DOB dating back 100 years (-)
5. Register to FB using gender as Male
6. Register to FB using gender as Female
7. Register to FB using gender as Others
8. Register to FB using details which you have already used (-)
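The two negative DOB scenarios above can be sketched as a small date check. The `can_register` rule, its 100-year cutoff implementation, and the fixed test date are all illustrative assumptions, not FB's actual logic.

```python
from datetime import date, timedelta

# Hedged sketch of the negative DOB scenarios from the list above.
# `can_register` and its cutoff are illustrative assumptions.

def can_register(dob: date, today: date) -> bool:
    """Reject a DOB that is today's date or more than ~100 years back."""
    oldest = today - timedelta(days=100 * 365)
    return oldest <= dob < today

today = date(2024, 1, 15)                      # fixed date for repeatability
assert can_register(date(1990, 5, 1), today)   # ordinary DOB: accepted
assert not can_register(today, today)          # DOB is today's date (-)
assert not can_register(today - timedelta(days=101 * 365), today)  # ~101 yrs back (-)
```

Passing `today` explicitly instead of calling `date.today()` inside the function keeps the check deterministic, which matters when the same scenario is re-executed later.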



===================================================================
Q: How to proceed with writing testcases when the requirements are given?
Ans: Follow the below process to write the test cases:
 (a) Understand the requirements.
 (b) Write the test scenarios for the given requirements.
 (c) Write Test cases for all the derived scenarios by applying test case design technique. 
 (d) It should undergo review process.

Ex: Amount transfer is the requirement. Example scenarios include:
 (i) Amt transfer within the same bank
 (ii) Amt transfer between different banks
 (iii) Amt transfer using third-party apps
 (iv) Amt transfer using a blocked account (-)
 (v) Amt transfer with insufficient funds (-)
 (vi) Amt transfer using NEFT
 
After completion of writing scenarios send it for review OR set up a meeting for review. After review, write test cases for these scenarios by applying Test Case Design Techniques.

After writing the test cases, send those test cases for review.
===================================================================

Q: Who should write the test cases?
Ans: All the QA members should write the test cases based on the User stories/requirements assigned to them.


**Q: How many test cases can you write per day?
Ans: Around 15-30 test cases per day, based on the complexity of the requirements.
Ex: Each test case may have around 10-40+ steps.


**Q: How many test cases can you execute manually per day?
Ans: Around 20-30 test cases per day, based on the complexity of the requirements.


Q: What process do you follow after writing the test cases?
Ans: In our organization, after writing the test cases, they undergo a review process. First we perform an internal review, done by the BA, Dev, peers, etc. Once the internal review is completed and all internal review comments are fixed, the same test cases are sent for external review, which is done by the customer.


Q: Were you involved in reviewing test cases?
Ans: Yes. As part of peer review, I have been involved in the test case review process.


Q: What factors do you consider while reviewing test cases?
Ans: The following key points have to be considered:
(a) Test case coverage.
(b) Minimum steps with maximum coverage.
(c) Usage of proper test data for the test cases.
(d) Proper pre-requisites whenever required.
(e) Usage of the proper test case template.
(f) Every step should have an appropriate expected result.
(g) All the test cases should have a proper priority set.
(h) We make sure no duplicate test cases are written.


**Q: On what basis do you set the priority for the test cases?
Ans: Based on:
(a) The impact on the business.
(b) How frequently the customer uses that feature.


Q: Write a test case for Pen?
Q: Write a test case for Water bottle?
Q: Write a test case for Fan?
Q: Write a test case for ATM machine?
Q: Write a test case for Paper?
Q: Write Test case for FB login?
Q: They may show you a picture of an application OR page and then ask you to write test cases for the same.